Discovery and Separation of Features for Invariant Representation Learning


Supervised machine learning models often associate irrelevant nuisance factors with the prediction target, which hurts generalization. We propose a framework for training robust neural networks that induces invariance to nuisances by learning to discover and separate the predictive and nuisance factors of data. We present an information-theoretic formulation of our approach, from which we derive training objectives and draw connections to previous methods. Empirical results on a wide array of datasets show that the proposed framework achieves state-of-the-art performance without requiring nuisance annotations during training.


Jaiswal, Ayush, Brekelmans, Rob, Moyer, Daniel, Steeg, Greg Ver, AbdAlmageed, Wael, Natarajan, Premkumar

arXiv.org Machine Learning
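The abstract's core idea of discovering and separating predictive and nuisance factors can be illustrated with a toy objective. The sketch below is a hypothetical illustration, not the authors' actual method: it uses a linear encoder whose embedding is split into a predictive part `e1` and a nuisance part `e2`, where `e1` alone drives the prediction, both parts jointly reconstruct the input (so nuisance information has somewhere to go), and a cross-covariance penalty serves as a crude stand-in for the mutual-information term that would enforce separation. All dimensions and weight initializations here are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for the sketch: 64 samples, 10 input features,
# a 4-dim predictive part (e1) and a 4-dim nuisance part (e2).
n, d_in, d1, d2 = 64, 10, 4, 4

# Linear encoder, predictor, and decoder (randomly initialized for the demo).
W_enc = rng.normal(size=(d_in, d1 + d2)) * 0.1
W_pred = rng.normal(size=(d1, 1)) * 0.1          # predictor sees only e1
W_dec = rng.normal(size=(d1 + d2, d_in)) * 0.1   # decoder sees both parts

x = rng.normal(size=(n, d_in))   # synthetic inputs
y = rng.normal(size=(n, 1))      # synthetic regression targets

def objective(x, y):
    e = x @ W_enc
    e1, e2 = e[:, :d1], e[:, d1:]
    # Keep e1 predictive of the target.
    pred_loss = np.mean((e1 @ W_pred - y) ** 2)
    # Reconstruct x from the full embedding, so nuisances can live in e2.
    rec_loss = np.mean((e @ W_dec - x) ** 2)
    # Cross-covariance penalty between e1 and e2: a crude proxy for
    # minimizing statistical dependence (the information-theoretic term
    # in the paper would be estimated differently).
    c1 = e1 - e1.mean(axis=0)
    c2 = e2 - e2.mean(axis=0)
    sep_loss = np.mean((c1.T @ c2 / n) ** 2)
    return pred_loss + rec_loss + sep_loss

loss = objective(x, y)
print(loss)
```

In a real model these three terms would be weighted and minimized jointly by gradient descent over a nonlinear encoder; the point of the sketch is only the structure of the objective: predict from the split-off part, reconstruct from everything, and penalize dependence between the two parts.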